Supplementary Materials for: Max-Sliced Mutual Information

A Proofs
A.1 Proof of Proposition 1

Part 1 is restated from [25] and was proved in [25, Appendix A.1].

Proof of Part 2: Non-negativity follows directly from the non-negativity of mutual information.

Proof of Part 5: The proof relies on the fact that functions of independent random variables are themselves independent. This concludes the proof.

A.2 Proof of Proposition 2

By translation invariance of mutual information, we may assume w.l.o.g. that the means are zero. Next, we show that we may equivalently optimize with an added unit-variance constraint. By [..., Example 3.4], we have I(A;B) = [...], where the last equality uses the unit-variance property and Schur's determinant formula. Armed with Lemma 1, we are in place to prove Proposition 2. Since the CCA solutions [...] [..., Theorem 2.2], which is restated next for completeness.
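The determinant step invoked above can be sketched as follows. This is the standard Gaussian mutual information identity, not the paper's exact display; the symbols Σ and ρ are our own notation for the joint covariance and the correlation of the unit-variance projections:

```latex
% Gaussian mutual information via determinants:
I(A;B) = \tfrac{1}{2}\log\frac{\det\Sigma_A\,\det\Sigma_B}{\det\Sigma},
\qquad
\Sigma = \begin{pmatrix}\Sigma_A & \Sigma_{AB}\\ \Sigma_{AB}^{\top} & \Sigma_B\end{pmatrix}.
% Schur's determinant formula factors the joint determinant:
\det\Sigma = \det\Sigma_A\,\det\!\big(\Sigma_B - \Sigma_{AB}^{\top}\Sigma_A^{-1}\Sigma_{AB}\big).
% For scalar projections with unit variances (\Sigma_A=\Sigma_B=1,\ \Sigma_{AB}=\rho):
I(A;B) = -\tfrac{1}{2}\log\big(1-\rho^{2}\big).
```

With unit variances, the Schur complement reduces to 1 − ρ², which is where the unit-variance normalization enters the argument.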
Max-Sliced Mutual Information
Quantifying dependence between high-dimensional random variables is central to statistical learning and inference. Two classical methods are canonical correlation analysis (CCA), which identifies maximally correlated projected versions of the original variables, and Shannon's mutual information, which is a universal dependence measure that also captures high-order dependencies. However, CCA only accounts for linear dependence, which may be insufficient for certain applications, while mutual information is often infeasible to compute/estimate in high dimensions. This work proposes a middle ground in the form of a scalable information-theoretic generalization of CCA, termed max-sliced mutual information (mSMI). It enjoys the best of both worlds: capturing intricate dependencies in the data while being amenable to fast computation and scalable estimation from samples. We show that mSMI retains favorable structural properties of Shannon's mutual information, like variational forms and identification of independence.
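To make the connection between mSMI and CCA concrete, here is a minimal numpy sketch for the jointly Gaussian case, where the max-sliced mutual information is attained by the top canonical pair and equals −½ log(1 − ρ₁²) for the top canonical correlation ρ₁. The helper name `gaussian_msmi` and the regularizer `eps` are our own; this is an illustration under a Gaussianity assumption, not the paper's general estimator:

```python
import numpy as np

def gaussian_msmi(X, Y, eps=1e-10):
    """Hypothetical helper: for (approximately) jointly Gaussian samples,
    mSMI reduces to the Shannon MI of the top canonical pair,
    -0.5 * log(1 - rho1^2), with rho1 the top canonical correlation."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    n = X.shape[0]
    # Empirical (regularized) covariance blocks.
    Cxx = X.T @ X / n + eps * np.eye(X.shape[1])
    Cyy = Y.T @ Y / n + eps * np.eye(Y.shape[1])
    Cxy = X.T @ Y / n
    # Whiten each side via Cholesky factors; the singular values of the
    # whitened cross-covariance are the canonical correlations.
    Wx = np.linalg.inv(np.linalg.cholesky(Cxx))
    Wy = np.linalg.inv(np.linalg.cholesky(Cyy))
    rho1 = np.linalg.svd(Wx @ Cxy @ Wy.T, compute_uv=False)[0]
    rho1 = min(rho1, 1 - 1e-12)  # guard against numerical overshoot past 1
    return -0.5 * np.log(1.0 - rho1 ** 2)
```

For strongly linearly related data this returns a large value, while for independent samples it is close to zero, matching the identification-of-independence property mentioned above.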